perm filename DENNET[F79,JMC]2 blob
sn#525118 filedate 1980-07-19 generic text, type C, neo UTF8
Comments on %2Brainstorms%1.
Either artificial intelligence has had a major effect on philosophy
or there is some major parallelism in different areas of thought.
Actually, the details of the artificial intelligence research that
has been carried out don't matter as far as the philosophical ideas go.
Merely taking seriously the project of making computers behave
intelligently is transforming philosophy.
1. Minor point - or at least I think it's minor. There is no need to
ascribe perfect rationality to chess programs. It doesn't seem to me
that lack of perfect rationality is necessarily a defect in the
intentional system itself, although such a defect would produce
irrationality.
Comments from a common point of view
5 Dennett need not assume that the computer will choose the most rational
move. All that is important is that the behavior be predictable by
ascribing some ideas and goals to the program. For example, if one
were told that a certain program valued rooks no more highly than
knights, one could use it to predict behavior. One might also be
told that it either liked or disliked queen sacrifices.
Perhaps if one knows nothing about the program, one should assume
perfect rationality, but without knowing any details of a program,
I would bet that a random program would not attack a strong defensive
position and would waste time permitting its opponent to prepare an
attack behind a screen. This quibble doesn't affect any of Dennett's
major points.
17 I don't see how a "system for false beliefs" means something different
from a pervasive delusion like Marxism, and one sees them all the time.
38 Here as elsewhere, Dennett mistakenly suggests that two very different
ascriptions of belief may be consistent with a whole biography. The
boundary between admitting and not admitting two readings for a
simple substitution cipher with word boundaries is probably less
than 15 letters.
41 The inner language almost certainly has untranslatable demonstratives
and doesn't really use tokens. A person's mentalese is
idiosyncratic, and we have to translate in order to express our
thoughts in words.
45 Ingenuity helps in reducing the number of facts that have to
be remembered. For example, the obvious way of specifying that
⊗n objects are all different requires ⊗n(n-1)/2 sentences, but
if each is given a distinct integer index, then their inequality
follows from the inequality of the indices, and this follows
from the transitivity, totality and irreflexivity of <.
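The saving can be sketched concretely (a hypothetical Python illustration; the function names are mine, not from the text):

```python
from itertools import combinations

def pairwise_axioms(names):
    """Naive encoding: one explicit distinctness sentence per unordered
    pair, so n objects require n(n-1)/2 stored facts."""
    return [f"{a} != {b}" for a, b in combinations(names, 2)]

names = ["obj%d" % i for i in range(5)]
assert len(pairwise_axioms(names)) == 5 * 4 // 2   # 10 explicit sentences

# Index trick: store one integer per object; the inequality of any
# pair then follows from the transitivity, totality and irreflexivity
# of <, so no pairwise facts need be remembered at all.
index = {name: i for i, name in enumerate(names)}

def distinct(a, b):
    return index[a] < index[b] or index[b] < index[a]   # totality of <
```

The stored information grows linearly in the number of objects rather than quadratically.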
46 The Lewis Carroll argument shows that some information has
to be represented in active hardware rather than in written rules.
However, this is a constant and rather small amount. A brain could,
as far as Carroll's argument goes,
be a 4 symbol 7 state Turing machine with all other
information written on the tape. There is other engineering
and physiological evidence that the active part is much larger.
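The division between a constant-size active part and information written on the tape can be sketched with a minimal Turing-machine stepper (a Python sketch with names of my choosing; the example machine is a trivial unary successor, not Minsky's 4-symbol 7-state universal machine):

```python
def run_tm(table, tape, state="q0", head=0, halt="halt", max_steps=10_000):
    """Tiny Turing-machine stepper: the fixed 'active hardware' is just
    this loop plus the transition table; everything else is data on the
    tape, as in the Lewis Carroll regress argument."""
    tape = dict(enumerate(tape))              # sparse tape, blank = '_'
    for _ in range(max_steps):
        if state == halt:
            break
        sym = tape.get(head, "_")
        write, move, state = table[(state, sym)]
        tape[head] = write
        head += {"L": -1, "R": 1}[move]
    return "".join(tape[i] for i in sorted(tape))

# Example machine: append a '1' to a block of 1s (unary successor).
succ = {
    ("q0", "1"): ("1", "R", "q0"),    # scan right over the 1s
    ("q0", "_"): ("1", "R", "halt"),  # write one more 1 and halt
}
```

The point of the sketch is only that the interpreter loop is a small constant amount of machinery, however much is written on the tape.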
49 It seems to me that there are yet other ascriptions of belief
that would account for Sam's behavior. Given the elaborate model
described in the chapter on pain, there is room for all kinds of
levels of belief.
50 Various number systems are compatible with the ability to add,
but there isn't really an infinite variety unless we allow systems
of representation in which the amount of work required to transform
numbers into a form that can be added by small hardware is much
larger than the work involved in addition itself.
56 I keep thinking of trying to rescue the doctor of %2Le Malade
Imaginaire%1 by asking what would be required before %2virtus
dormitiva%1 was a good explanation for the effects of sleeping
powders. Let us imagine that it was like a radioactive tracer.
It could not be extracted in visible material form but was
conserved in a wide variety of chemical reactions, and which
way it went in an extraction process could be determined
by extracting it with alcohol (for which it had an
affinity) and giving a bioassay with patients or mice.
That would be enough for quite a good theory. If one
were merely contemplating such a theory, much less evidence
would be enough to make the idea interesting. The model
of theorizing as taking out a loan fits well here.
63 Skinner seems to attach probabilities to behaviors, but there
would seem to be too many potential behaviors for these probabilities
to be individually learned and stored. Therefore, the probability
of a whole behavior has to be determined in some complicated way
from numbers (not necessarily probabilities) and other data associated
with some smaller feasible number of components.
65 One persistent blindness won't refute the idea that the wasp is
clever any more than we can refute the theory of relativity by
noticing a persistent grammatical error in Einstein's 1905 paper.
70 The last sentence seems to be an appeal to be considered a "good guy".
Skinner's proposals might have interesting aspects, although I
didn't find anything but wishful thinking in %2Walden 2%1. If
Dennett would do the penance of reading Skinner's proposals objectively
and carefully, I would be grateful if he would let me know if he
found anything interesting.
71 The division of labor between the generator and evaluator is
more complex than that in game playing programs. However, in chess programs
it appears that the weaknesses are more in the generator than in
the evaluator.
89 A pattern matcher does generate and test internally, but in its
external behavior as a computing element, it just matches. Thus
intentionally it is a matcher.
93 Your mentalese concept of airplane may be quite different from mine.
The amount of overlap required for communication isn't so great.
101 I think that "representation relative to a theory" as described
in my %2Ascribing Mental Qualities to Machines%1 is more nearly what
is required than "something is a representation only ⊗for or ⊗to someone".
109 Dennett is often overoptimistic about what AI has actually achieved.
123 "Representations that understand themselves" is mysterious
to me. Also understanding is a quality of the whole system. Its
parts may not individually understand anything. I must admit that
the discussion of understanding in %2Ascribing ...%1 is cursory,
and my ideas haven't advanced much.
126 Few see much connection between Minsky's frames and the frame
problem of McCarthy and Hayes. There is a tenuous connection though.
139 This reminds me of an SF situation. Crowdedness causes people's
minds to be embodied in microcomputers, and one spends part of one's
conscious life in simulated environments and only ventures out into
the real world in a body when one has saved up to pay the costs.
Eventually someone discovers how to update a consciousness by a month
or a year without actually going through the intermediate stages.
Is that acceptable to the people? The extreme is, "Why run the
program at all? Why isn't the mathematical possibility, even if
no-one and nothing ever computes it, just as satisfactory? Why
aren't we satisfied with 'might have beens'? What's wrong with
daydreaming?"
150 "Is it like something to be an X" strikes me as a wrong question.
152 See Hofstadter's "Ant Hillary".
160 What information is in a "speech act command"?
163 Information theory in the sense of coding theory as originated
by Shannon has little to say of interest. There is no reason
to suppose that the information in the brain or that transmitted
around the brain is coded efficiently. The number of bits in a
message is usually approximately proportional to its length, and
the additional information given by sophisticated information
theory may not address any important question.
A more interesting question is that of labelled information.
Suppose that the length of a road is 6.7 miles. In some of its
travels this information can be represented just by the number
6.7, e.g. in its transfer to a specialized unit for converting
miles to kilometers that will send 10.78 back along
another specialized line. In other travels the information
should be represented as (miles 6.7) and can be handled by
units that take distances expressed in arbitrary known units.
In still others, the information might be (equal (distance
Lynchville Danvill) (miles 6.7)), and in still others that
information may be further decorated with its purpose. This
is a design decision, and the trend in AI is perhaps to decorate
the information more at the expense of efficiency. What the
brain does is mostly unknown, but there is an analogous phenomenon
observed in the growth of the nervous system in that nerve fibers
are apparently coded so that the right fiber connects with the
right peripheral muscle area when a nerve regenerates.
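The levels of decoration in the road example can be sketched (a hypothetical Python illustration; the function names are mine, and the 6.7-mile road and the 10.78 km figure come from the text above):

```python
# The same fact at three levels of decoration, per the road example.
raw = 6.7                                                 # bare number
tagged = ("miles", 6.7)                                   # unit attached
full = ("equal", ("distance", "Lynchville", "Danvill"),
        ("miles", 6.7))                                   # fully labelled

TO_KM = {"miles": 1.609344, "km": 1.0}   # conversion factors to kilometres

def miles_line_to_km(x):
    """Specialized line: trusts its input to be in miles, so it can
    handle the bare, undecorated number."""
    return round(x * TO_KM["miles"], 2)

def to_km(value):
    """More general unit: requires the (unit, magnitude) decoration,
    but accepts distances in any unit it knows."""
    unit, magnitude = value
    return ("km", round(magnitude * TO_KM[unit], 2))
```

The trade-off is visible even here: the decorated form costs more storage and handling but frees the receiving unit from assumptions about its input.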
172 Whether an entity has an inner life seems to depend on
whether it has a rich array of beliefs and goals about beliefs
and goals.
173 Who are the "best of the introspectionists"? I'd like to read
some of their works.
181 It would be interesting to work out a theory of "intentional
objects with (possibly) contradictory properties".
190 The account of the accomplishments of AI makes me a little
nervous. "superb checkers" may be putting it too strongly.
"good chess" is ok. "novel and unexpected proofs of non-trivial
theorems" may be exaggerated, and "can conduct sophisticated
conversations" is definitely exaggerated considering what is
said elsewhere in the book about the limitations of SHRDLU.
The trouble is that AIers don't yet understand intentionality,
and Dennett isn't putting its facts into a form that is directly
useful.
I wonder which skeptics really committed themselves against the
possibility of good chess. I guess maybe Dreyfus did at one
point, but he gave that up in 1967 or thereabouts.
It is impossible to see what new hardware would do given that it
could be simulated on the old.
191 If the simulation didn't take account of the effects of
hurricanes on fishing boats, the fishermen would find it
incomplete. There is, however, a worthwhile distinction between
putting in enough features so that the main phenomena being
studied can be followed (kind of closedness or completeness of
the model) and making it complete enough for some application.
As far as the main simulation of the storm goes, the effects on
the fishing boats are an epiphenomenon, since they don't appreciably
react back on the storm.
194 Winograd's program probably has shortcuts so that the robot
and its environment are inadequately distinguished.
197 Can the cabinetmaker make a functional copy of a Hepplewhite chair?
That is the analogous question.
198 My problem is that I don't have pain well enough defined intentionally.
After reading all the genuinely relevant physiological detail in this
article, I understand why.
238 Again the state of the art is exaggerated. I don't know any
"evolving programs capable of self-improvement".
239 I don't think our initial attitude towards a person is
anything like assuming perfect rationality. I generally expect
a somewhat muddled person and am pleased and surprised if he
or she impresses me as really rational.
246 Again I don't see why it isn't convenient to describe lapses of
rationality entirely in intentional terms rather than having to
go back to the physical level. I guess I think most such lapses
are best described as mistaken beliefs than as malfunctions of the
hardware. Certainly Freudian psychology is an intentional theory
that does not assume full rationality.
*****
254 It seems to me that here is a major error. There is no reason
why a computer program can't have a full description of itself both
in software and hardware. There is no paradox because the very
same software that is interpreted by the machine can be read by
the program when questions have to be asked about it. To include
a description of the machine is not much additional information.
The machine, with the aid of tape, can compute its own responses
to an initial situation, and interpreters that can interpret their
own code are common. A paradox arises only when you ask whether
the machine can predict ahead of itself and do the opposite of
what it predicts. Self-simulation is possible but must run
slower than real time. As an exercise in the double use of information
write a LISP program whose output is a copy of itself. A harder
similar exercise is to write such a FORTRAN program.
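The exercise is the classical quine. McCarthy poses it for LISP; a minimal sketch in Python (chosen here only for brevity) shows the same double use of information:

```python
# A quine: the string s is used both as data (via repr) and as program
# text, so the program's output is an exact copy of its own two lines.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

The LISP version uses a single expression both quoted and evaluated; FORTRAN is harder because the language lacks any convenient quoting mechanism, which is presumably the point of the exercise.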
*****
256 I agree with the main conclusion of this chapter - that Goedel's
theorem cannot be used to conclude that humans possess abilities
that machines can't have. However, the arguments given seem wrong
to me and I will propose substitutes. In particular, some of the
appeals to recursive function theory appear to misstate the theorems
used.
256 The chapter on "The abilities of men and machines" seems to rely
mainly on mistaken arguments even though I agree with its conclusions.
In the first place, Goedel sentences are not associated with
Turing machines; they are associated with theories. A theory can
in some sense be implemented by a Turing machine that enumerates
theorems or checks proofs in the theory, but not all Turing machines
have this character. In particular, a universal Turing machine
implements no particular theory. Any theory can be implemented by
a suitable program.
It is important to distinguish Goedel's theorem from the
Goedel sentence of a theory. Goedel's theorem is a single theorem
and can be known by a machine. In principle it can be proved by
a computer program, but present programs are not smart enough to
do it in any honest sense.
The correct argument goes as follows. Suppose I give you
a theory and incidentally a computer program that enumerates the
theorems of the theory. Suppose further that I assure you that
the theory is consistent, since you cannot necessarily satisfy
yourself on this point. Then you can use Goedel's theorem to
construct a sentence that my program will never enumerate. The
statement of consistency of my program's theory is just such a
sentence.
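The argument can be put in symbols (a sketch; here ⊗T is assumed consistent, recursively enumerable, and strong enough for arithmetic):

```latex
% Goedel's second incompleteness theorem, applied to the enumerated
% theory T: a consistent T cannot prove its own consistency.
T \text{ consistent} \;\Rightarrow\; T \nvdash \mathrm{Con}(T)
% Hence Con(T) is a true sentence of arithmetic that the
% theorem-enumerating program never lists.
```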
However, my program can cut short your glee by saying,
"Now tell me the principles by which you prove sentences of
arithmetic. Then I will construct a sentence that you will
never prove if your theory is consistent."
It is as though I were to propose a contest to see who
can name the largest number and propose that you name yours first.
There is even more mathematics relevant to philosophical
consideration of this problem. In my opinion, Feferman's
"Transfinite Progressions of Theories" is of interest. Feferman
points out that any theory of arithmetic can have added to it
"principles of self-confidence". For example, if ⊗T is a theory,
you can get a new theory ⊗T' by adding a sentence whose content
is that ⊗T is consistent. Repeating the process gives you
the sequence ⊗T, ⊗T', ⊗T'', etc. all different. Turing studied
such sequences of theories. Feferman proposes
a different self-confidence principle stronger than consistency,
expressed roughly by %2∀n.provable(P(n)) ⊃ ∀n.P(n)%1. It says
that if you have a sequence of sentences, and you can conclude
that all the sentences are provable, you may conclude the universal
quantification of the sentences. Some fuss with quotation
or Goedel numbers is required to state this precisely. The
process of iterating the theories can be carried into the transfinite,
i.e. you can construct a theory that includes the whole sequence
⊗T, ⊗T', etc. Indeed for any recursive ordinal, the process can
be iterated that far, but at some kinds of limit ordinals, you have
some notational choices to make, and there is no single way that
you can make all these choices once and for all in advance. For this
reason, the process of iteration through the constructive ordinals
cannot all be defined at once, and therefore you can't take it to
the limit. Feferman further shows that there are what he calls
"progressions of theories" such that the limit of the progression
is the set of true sentences of arithmetic. Thus one can say that
all Peano arithmetic lacks is self-confidence, but it lacks an
infinite amount of self-confidence, and no finite amount of
reassurance will complete it. No-one has really studied this
from a philosophical point of view as far as I know.
The philosophically interesting part of this is what Feferman
tells us about how and to what extent we can transcend our limitations.
The self-confidence principles can't be proved within the theories,
but one is inclined to believe them. The unconstructability of
a single definite progression should also have interesting
philosophical consequences.
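The construction described above can be sketched as follows (the notation is mine, not Feferman's):

```latex
% Iterating self-confidence principles over a base theory:
T_0 = \text{Peano arithmetic}, \qquad
T_{\alpha+1} = T_\alpha + \mathrm{Con}(T_\alpha), \qquad
T_\lambda = \bigcup_{\alpha<\lambda} T_\alpha
  \ \text{ for limit ordinals } \lambda.
% Feferman's stronger reflection principle, roughly:
\forall n\,\mathrm{provable}(\ulcorner P(\dot n)\urcorner)
  \;\supset\; \forall n\,P(n)
```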
262 Again we have the improbable assumption that the activities of
an infant or moron admit interpretation as proving the theorems of
arithmetic. However, this error doesn't really contribute to the
main error of the paper.
264 I am not quite sure what this means now, but I am pretty sure
this is not the reason why Goedel's theorem cannot be used to show
the superiority of man to machine.
271 I don't see that the capacity for verbal communication is either
necessary or sufficient for self-consciousness.
288 Let me tell you my mathematics fiction story, %2Good Judgment%1.